
    Beta Burr XII OR Five Parameter Beta Lomax Distribution: Remarks and Characterizations

    The distributions taken up in two recently published papers are compared, and certain characterizations of them are presented. These characterizations are based on: (i) a simple relationship between two truncated moments; (ii) truncated moments of certain functions of the nth order statistic; (iii) truncated moments of certain functions of the random variable.
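
    Conditions of type (i) commonly follow a Glänzel-type template; the schematic form below is given only for orientation and is not the papers' exact theorem:

```latex
% Schematic Glänzel-type condition: within a suitable class of distributions,
% X is characterized by two functions q_1, q_2 and a function \eta satisfying
\mathbb{E}\bigl[\,q_2(X) \mid X \ge t\,\bigr]
  \;=\; \eta(t)\,\mathbb{E}\bigl[\,q_1(X) \mid X \ge t\,\bigr]
  \qquad \text{for all } t .
```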

    Characterizations of Distributions via Conditional Expectation of Generalized Order Statistics

    Characterizations of probability distributions by different regression conditions on generalized order statistics have attracted the attention of many researchers. We present here characterizations of certain continuous distributions based on the conditional expectation of generalized order statistics.

    Regularized Multivariate Regression Models with Skew-t Error Distributions

    We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both the regression coefficient and inverse scale matrices simultaneously. The sparsity is introduced through penalizing the negative log-likelihood by adding L1-penalties on the entries of the two matrices. Taking advantage of the hierarchical representation of skew-t distributions, and using the expectation conditional maximization (ECM) algorithm, we reduce the problem to a penalized normal likelihood and develop a procedure to minimize the ensuing objective function. The performance of the method is assessed in a simulation study, and the methodology is illustrated using a real data set with a 24-dimensional response vector.
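
    L1 penalties of this kind are typically handled through a soft-thresholding (proximal) step inside each conditional maximization. A minimal sketch of that operator, not the paper's full ECM algorithm:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the L1 penalty: shrinks entries toward zero and
    sets those with |z| <= t exactly to zero (this is what induces sparsity)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Entries below the threshold are zeroed out; larger ones are shrunk by t.
shrunk = soft_threshold(np.array([3.0, -0.5, 1.0, -2.0]), 1.0)
```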

    Progressive Reliability Method and Its Application to Offshore Mooring Systems

    Assessing the reliability of complex systems (e.g. structures) is essential for a reliability-based optimal design that balances safety and costs of such systems. This paper proposes the Progressive Reliability Method (PRM) for the quantification of the reliability of complex systems. The proposed method is a closed-form solution for calculating the probability of failure. The new method is flexible in the definition of “failure” (i.e., it can consider serviceability and ultimate-strength failures) and uses the rules of probability theory to estimate the failure probability of the system or its components. The method is first discussed in general and then illustrated in two examples, including a case study to find the safest configuration and orientation of a 12-line offshore mooring system. The PRM results are compared with the results of a similar assessment based on Monte Carlo simulations. The two-component example shows that, using PRM, the importance of system components to system safety can be quantified and compared, providing input information for maintenance planning.
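
    For context, the Monte Carlo baseline that a closed-form method like PRM would be compared against looks like this for a hypothetical two-component series system (all distributions and parameter values here are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000

# Hypothetical limit-state functions g_i = capacity_i - demand; failure when g_i < 0.
capacity1 = rng.normal(5.0, 1.0, n)
capacity2 = rng.normal(6.0, 1.5, n)
demand = rng.normal(3.0, 0.8, n)       # a shared load couples the two components

fails1 = capacity1 - demand < 0
fails2 = capacity2 - demand < 0
system_fails = fails1 | fails2         # series system: any component failure fails the system

pf_system = system_fails.mean()        # Monte Carlo estimate of the system failure probability
```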

    Robust Estimation of the Correlation Matrix of Longitudinal Data

    We propose a double-robust procedure for modeling the correlation matrix of a longitudinal dataset. It is based on an alternative Cholesky decomposition of the form Σ = DLL⊤D, where D is a diagonal matrix proportional to the square roots of the diagonal entries of Σ and L is a unit lower-triangular matrix determining solely the correlation matrix. The first robustness is with respect to model misspecification for the innovation variances in D, and the second is robustness to outliers in the data. The latter is handled using heavy-tailed multivariate t-distributions with unknown degrees of freedom. We develop a Fisher scoring algorithm for computing the maximum likelihood estimator of the parameters when the nonredundant and unconstrained entries of (L, D) are modeled parsimoniously using covariates. We compare our results with those based on the modified Cholesky decomposition of the form LD²L⊤ using simulations and a real dataset.
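
    Both factorizations can be obtained from an ordinary Cholesky factor by rescaling its rows or columns. A minimal NumPy sketch on a generic covariance matrix (illustrative only; the paper estimates the entries of (L, D) from data rather than computing them from a known Σ):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Sigma = A @ A.T + 4 * np.eye(4)   # a generic positive-definite "covariance" matrix

C = np.linalg.cholesky(Sigma)     # lower triangular, positive diagonal: Sigma = C C^T
d = np.diag(C)
D = np.diag(d)

# Alternative form  Sigma = D L L^T D:  scale each ROW of C by 1 / C[i, i].
L_alt = C / d[:, None]            # unit lower-triangular

# Modified form  Sigma = L D^2 L^T:  scale each COLUMN of C by 1 / C[j, j].
L_mod = C / d[None, :]            # unit lower-triangular
```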

    Empirical Bayes and Hierarchical Bayes Estimation of Skew Normal Populations

    We develop empirical and hierarchical Bayesian methodologies for skew normal populations through the EM algorithm and the Gibbs sampler. A general notion of skewness for the normal distribution is considered throughout. Motivations are given for considering the skew normal population in applications, and an example is presented to demonstrate why the skew normal distribution is more applicable than the normal distribution for certain applications.
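
    EM and Gibbs constructions for skew normal models typically rest on the standard stochastic representation X = ξ + ω(δ|Z₀| + √(1−δ²)Z₁) with a half-normal latent variable, where δ = α/√(1+α²). A quick simulation check of that representation (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
xi, omega, alpha = 0.0, 1.0, 5.0        # location, scale, skewness (illustrative values)
delta = alpha / np.sqrt(1 + alpha**2)

n = 200_000
z0 = np.abs(rng.standard_normal(n))     # half-normal latent variable ("hidden truncation")
z1 = rng.standard_normal(n)
x = xi + omega * (delta * z0 + np.sqrt(1 - delta**2) * z1)

# The theoretical mean of SN(xi, omega, alpha) is xi + omega * delta * sqrt(2/pi).
theo_mean = xi + omega * delta * np.sqrt(2.0 / np.pi)
```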

    Remarks on Characterizations of Malinowska and Szynal

    Characterizing a distribution is an important problem that has recently attracted the attention of many researchers, and various characterizations have been established in many different directions. An investigator will be vitally interested to know whether their model fits the requirements of a particular distribution. To this end, one depends on characterizations of that distribution, which provide conditions under which the underlying distribution is indeed the particular distribution in question. In this work, several characterizations of Malinowska and Szynal (2008) for certain general classes of distributions are revisited and simpler proofs of them are presented. These characterizations are not based on conditional expectations of the kth lower record values (as in Malinowska and Szynal); rather, they are based on: (i) simple truncated moments of the random variable; (ii) the hazard function.

    Integrating Data Transformation in Principal Components Analysis

    Principal component analysis (PCA) is a popular dimension-reduction method used to reduce the complexity of high-dimensional datasets and extract their informative aspects. When the data distribution is skewed, data transformation is commonly applied prior to PCA. Such a transformation is usually obtained from previous studies, prior knowledge, or trial and error. In this work, we develop a model-based method that integrates data transformation into PCA and finds an appropriate transformation using the maximum profile likelihood. Extensions of the method to handle functional data and missing values are also developed. Several numerical algorithms are provided for efficient computation. The proposed method is illustrated using simulated and real-world data examples. Supplementary materials for this article are available online.
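
    The common practice this work improves upon (transform first, then apply PCA) can be sketched with a per-column Box-Cox transform chosen by maximum likelihood. This is the baseline two-step approach, not the paper's integrated profile-likelihood method:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
X = rng.lognormal(mean=0.0, sigma=1.0, size=(500, 3))   # strongly right-skewed columns

# Step 1: transform each column (Box-Cox lambda fitted by maximum likelihood).
Xt = np.column_stack([stats.boxcox(X[:, j])[0] for j in range(X.shape[1])])

# Step 2: ordinary PCA of the transformed, centered data via the SVD.
Xc = Xt - Xt.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)    # proportion of variance explained per component
```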

    Analyzing Multiple-Probe Microarray: Estimation and Application of Gene Expression Indexes

    Gene expression index estimation is an essential step in analyzing multiple-probe microarray data, and various modeling methods have been proposed in this area. Among them, a popular method proposed by Li and Wong (2001) is based on a multiplicative model, which is similar, on the logarithm scale, to the additive model discussed in Irizarry et al. (2003a). Along this line, Hu et al. (2006) proposed data transformations to improve expression index estimation, based on an ad hoc entropy criterion and a naive grid-search approach. In this work, we re-examine this problem using a new profile-likelihood-based transformation estimation approach that is more statistically elegant and computationally efficient. We demonstrate the applicability of the proposed method using a benchmark Affymetrix U95A spike-in experiment. Moreover, we introduce a new multivariate expression index and use an empirical study to show its promise in improving model fit and the power of detecting differential expression over the commonly used univariate expression index. As another important part of this work, we discuss two practical issues commonly encountered in applying gene expression indexes: normalization and the summary statistic used for detecting differential expression. Our empirical study shows somewhat different findings from those of the MAQC project (MAQC, 2006).
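
    The multiplicative model of Li and Wong (2001), y_ij ≈ θ_i φ_j, becomes additive on the log scale, where the least-squares fit reduces to row and column means. A toy illustration on simulated probe intensities (all parameter values are made up):

```python
import numpy as np

rng = np.random.default_rng(3)
n_chips, n_probes = 8, 11
theta = rng.uniform(1.0, 3.0, n_chips)   # true chip-level expression indexes (hypothetical)
phi = rng.uniform(0.5, 2.0, n_probes)    # probe affinities (hypothetical)
Y = np.outer(theta, phi) * np.exp(0.05 * rng.standard_normal((n_chips, n_probes)))

# On the log scale the model is additive: log Y_ij ≈ mu + a_i + b_j,
# and the least-squares estimate of the chip effect a_i is the centered row mean.
L = np.log(Y)
a = L.mean(axis=1) - L.mean()
theta_hat = np.exp(L.mean(axis=1))       # expression index, recovered up to a common scale
```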

    Assessing Protein Conformational Sampling Methods Based on Bivariate Lag-Distributions of Backbone Angles

    Despite considerable progress in the past decades, protein structure prediction remains one of the major unsolved problems in computational biology. Angular-sampling-based methods have been extensively studied recently due to their ability to capture the continuous conformational space of protein structures. The literature has focused on using a variety of parametric models of the sequential dependencies between angle pairs along the protein chains. In this article, we present a thorough review of angular-sampling-based methods by assessing three main questions: What is the best distribution type to model the protein angles? What is a reasonable number of components in a mixture model that should be considered to accurately parameterize the joint distribution of the angles? And what is the order of the local sequence–structure dependency that should be considered by a prediction method? We assess the model fits for different methods using bivariate lag-distributions of the dihedral/planar angles. Moreover, the main information across the lags can be extracted using a technique called lag singular value decomposition (LagSVD), which considers the joint distribution of the dihedral/planar angles over different lags using a nonparametric approach and monitors the behavior of the lag-distribution of the angles using singular value decomposition. As a result, we developed graphical tools and numerical measurements to compare and evaluate the performance of different model fits. Furthermore, we developed a web-tool (http://www.stat.tamu.edu/∼madoliat/LagSVD) that can be used to produce informative animations.
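
    The LagSVD idea (examine the singular values of the bivariate lag-distribution of angle pairs) can be sketched nonparametrically with a 2-D histogram. A toy version using i.i.d. angles as a stand-in: for truly independent angles, the lag-distribution is approximately the outer product of the two marginals, i.e., close to rank one.

```python
import numpy as np

rng = np.random.default_rng(9)
angles = rng.vonmises(0.0, 2.0, 50_000)   # stand-in dihedral-angle sequence (i.i.d. here)

lag = 1
# Bivariate lag-distribution: joint histogram of (angle_t, angle_{t+lag}).
H, _, _ = np.histogram2d(angles[:-lag], angles[lag:], bins=36, density=True)
s = np.linalg.svd(H, compute_uv=False)    # singular values of the lag-distribution

# For independent angles the leading singular value dominates; sequential
# dependence along the chain would spread mass to the trailing singular values.
leading_share = s[0] / s.sum()
```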